
    Convergent communication, sensing and localization in 6g systems: An overview of technologies, opportunities and challenges

    Herein, we focus on convergent 6G communication, localization, and sensing systems by identifying key technology enablers, discussing their underlying challenges and implementation issues, and recommending potential solutions. Moreover, we discuss exciting new opportunities for integrated localization and sensing applications, which will disrupt traditional design principles and revolutionize the way we live, interact with our environment, and do business. Regarding potential enabling technologies, 6G will continue to develop towards even higher frequency ranges, wider bandwidths, and massive antenna arrays. In turn, this will enable sensing solutions with very fine range, Doppler, and angular resolutions, as well as localization with cm-level accuracy. In addition, new materials, device types, and reconfigurable surfaces will allow network operators to reshape and control the electromagnetic response of the environment. At the same time, machine learning and artificial intelligence will leverage the unprecedented availability of data and computing resources to tackle the biggest and hardest problems in wireless communication systems. As a result, 6G will be a truly intelligent wireless system that provides not only ubiquitous communication but also high-accuracy localization and high-resolution sensing services. It will become the catalyst for this revolution by bringing about a unique new set of features and service capabilities, in which localization and sensing coexist with communication, continuously sharing the available resources in time, frequency, and space. This work concludes by highlighting foundational research challenges, as well as implications and opportunities related to privacy, security, and trust.
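    The cm-level claim above follows from standard radar relations: range resolution improves in inverse proportion to signal bandwidth. A minimal sketch (the formula is textbook radar theory, not from this overview; the 10 GHz bandwidth figure is an assumed illustrative value):

```python
# Standard radar relation (textbook formula, not from the paper):
#   range resolution dR = c / (2 * B), where B is the signal bandwidth.

C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz: float) -> float:
    """Achievable range resolution in meters for a given signal bandwidth."""
    return C / (2.0 * bandwidth_hz)

# 400 MHz (a 5G-era allocation) vs 10 GHz (an assumed 6G-scale bandwidth)
print(range_resolution(400e6))  # ~0.375 m
print(range_resolution(10e9))   # ~0.015 m, i.e. cm-level, as claimed above
```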

    Stainless steel weld metal designed to mitigate residual stresses

    There have been considerable efforts to create welding consumables whose solid-state phase transformation partly compensates for the stresses that develop when a constrained weld cools to ambient temperature. All of these efforts have focused on structural steels, which are ferritic. In the present work, alloy design methods have been used to create a stainless steel welding consumable that solidifies as δ-ferrite and transforms almost entirely into austenite, which then undergoes martensitic transformation at a low temperature of about 220 °C. At the same time, the carbon concentration has been kept to a minimum to avoid phenomena such as sensitisation. The measured mechanical properties, especially toughness, seem to be significantly better than those of commercially available martensitic stainless steel welding consumables, and it has been demonstrated that the use of the new alloy reduces distortion in the final joint.

    Learning discriminative models from structured multi-sensor data for human context recognition

    Abstract In this work, statistical machine learning and pattern recognition methods were developed and applied to sensor-based human context recognition. More precisely, we concentrated on an effective discriminative learning framework, where an input-output mapping is learned directly from a labeled dataset. Non-parametric discriminative classification and regression models based on kernel methods were applied, including support vector machines (SVM) and Gaussian processes (GP), which play a central role in modern statistical machine learning. Based on these established models, we propose various extensions for handling the structured data that usually arise in real-life applications, for example, in the field of context-aware computing. We applied both SVM and GP techniques to handle data with multiple classes in a structured multi-sensor domain. Moreover, a framework for combining data from several sources in this setting was developed using multiple classifiers and fusion rules, with kernel methods serving as base classifiers. We developed two novel methods for handling sequential input and output data. For sequential time-series data, a novel kernel based on a graph representation, called the weighted walk-based graph kernel (WWGK), is introduced. For sequential output labels, discriminative temporal smoothing (DTS) is proposed. Again, the proposed algorithms are modular, so different kernel classifiers can be used as base models. Finally, we propose a group of techniques based on Gaussian process regression (GPR) and particle filtering (PF) to learn to track multiple targets. We applied the proposed methodology to three different human-motion-based context recognition applications: person identification, person tracking, and activity recognition, where floor sensors (pressure-sensitive and binary switch) and wearable acceleration sensors are used to measure human motion and gait during walking and other activities.
Furthermore, we extracted a useful set of specific high-level features from raw sensor measurements based on the time, frequency, and spatial domains for each application. As a result, we developed practical extensions to kernel-based discriminative learning for handling many kinds of structured data, applied to human context recognition.
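    The decision-level fusion mentioned in the abstract can be illustrated with a toy sketch: per-sensor classifiers emit class scores, which a sum rule combines into one decision. The score values and the `sum_rule`/`predict` helpers below are hypothetical stand-ins, not the thesis's actual kernel models:

```python
# Toy decision-level fusion: each per-sensor classifier outputs class
# scores, and a sum rule merges them into one decision. The "classifiers"
# here are hard-coded score dicts standing in for trained kernel models.

def sum_rule(score_lists):
    """Combine per-sensor class-score dicts by summing scores per class."""
    fused = {}
    for scores in score_lists:
        for label, score in scores.items():
            fused[label] = fused.get(label, 0.0) + score
    return fused

def predict(score_lists):
    """Pick the class with the highest fused score."""
    fused = sum_rule(score_lists)
    return max(fused, key=fused.get)

floor_scores = {"walk": 0.6, "stand": 0.4}  # pressure-floor model (toy)
accel_scores = {"walk": 0.3, "stand": 0.7}  # accelerometer model (toy)
print(predict([floor_scores, accel_scores]))  # stand (0.4+0.7 > 0.6+0.3)
```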

    Analysis of Time Domain Information for Footstep Recognition

    Abstract. This paper reports an experimental analysis of footsteps as a biometric. The focus here is on information extracted from the time domain of signals collected from an array of piezoelectric sensors. Results are related to the largest footstep database collected to date, with almost 20,000 valid footstep signals and more than 120 persons, which is well beyond previous related databases. Three feature approaches have been extracted: the popular ground reaction force (GRF), the spatial average, and the upper and lower contours of the pressure signals. Experimental work is based on a verification mode with a holistic approach based on PCA and SVM, achieving results in the range of 5 to 15 % EER depending on the quantity of data used in the reference models.
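    A sketch of how the reported EER metric is computed from verification scores may help; the scores below are made-up toy values, and `eer` is an illustrative helper, not the paper's evaluation code:

```python
# Equal error rate (EER): the operating point where the false accept
# rate (FAR) and false reject rate (FRR) are closest to equal.

def eer(genuine, impostor):
    """Scan candidate thresholds and return the EER estimate."""
    best = (1.0, None)  # (|FAR - FRR|, EER at that threshold)
    for t in sorted(genuine + impostor):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        if abs(far - frr) < best[0]:
            best = (abs(far - frr), (far + frr) / 2)
    return best[1]

genuine  = [0.9, 0.8, 0.75, 0.6, 0.55]  # same-person footstep scores (toy)
impostor = [0.7, 0.5, 0.4, 0.3, 0.2]    # different-person scores (toy)
print(eer(genuine, impostor))  # 0.2, i.e. 20 % EER on this toy data
```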

    Ontology-based framework for integration of time series data: application in predictive analytics on data center monitoring metrics

    Abstract Monitoring a large and complex system such as a data center generates many time series of metric data, which are often stored using a database system specifically designed for managing time series data. Different, possibly distributed, databases may be used to collect data representing different aspects of the system, which complicates matters when, for example, developing data analytics applications that require integrating data from two or more of them. From the developer’s point of view, it would be highly convenient if all of the required data were available in a single database, but it may well be that the different databases do not even implement the same query language. To address this problem, we propose using an ontology to capture the semantic similarities among different time series database systems and to hide their syntactic differences. Alongside the ontology, we have developed a Python software framework that enables the developer to build and execute queries using classes and properties defined by the ontology. The ontology thus effectively specifies a semantic query language that can be used to retrieve data from any of the supported database systems, and the Python framework can be set up to treat the different databases as a single data store that can be queried using this semantic language. This is demonstrated by presenting an application involving predictive analytics on resource usage and electricity consumption metrics gathered from a Kubernetes cluster and stored in Prometheus and KairosDB databases, but the framework can be extended in various ways and adapted to different use cases, enabling machine learning research using distributed heterogeneous data sources.
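    The core idea, one semantic query translated per backend, can be sketched as follows. Class names and query shapes here are illustrative assumptions, not the framework's real API: the Prometheus string is only PromQL-like, and the KairosDB dict only mimics the layout of its JSON query format.

```python
# One semantic query object, translated into each backend's native form.
# All names and formats below are hypothetical stand-ins.

class TimeSeriesQuery:
    """Backend-agnostic description of 'metric over [start, end]'."""
    def __init__(self, metric, start, end):
        self.metric, self.start, self.end = metric, start, end

class PrometheusBackend:
    def translate(self, q):
        return f"{q.metric}[{q.start}:{q.end}]"  # PromQL-like string

class KairosDBBackend:
    def translate(self, q):
        return {"metrics": [{"name": q.metric}],   # KairosDB-style JSON dict
                "start_absolute": q.start, "end_absolute": q.end}

def run(query, backends):
    """Send the same semantic query to every registered backend."""
    return {type(b).__name__: b.translate(query) for b in backends}

q = TimeSeriesQuery("cpu_usage", 0, 3600)
print(run(q, [PrometheusBackend(), KairosDBBackend()]))
```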

    Getting more out of small data sets: improving the calibration performance of isotonic regression by generating more data

    Abstract Often it is necessary to have an accurate estimate of the probability that a classifier prediction is indeed correct. Many classifiers output a prediction score that can be used as an estimate of that probability, but for many classifiers these scores are not well calibrated. If enough training data is available, it is possible to post-process these scores by learning a mapping from prediction scores to probabilities. One of the most used calibration algorithms is isotonic regression. This kind of calibration, however, requires a decent amount of training data to avoid overfitting, and many real-world data sets do not have an excess of data that can be set aside for calibration. In this work, we have developed a data generation algorithm to produce more data from a limited-size training data set. We used two variations of this algorithm to generate the calibration data set for isotonic regression calibration and compared the results to the traditional approach of setting aside part of the training data for calibration. Our experimental results suggest that this can be a viable option for smaller data sets if good calibration is essential.
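    The paper's generation algorithm is not detailed in the abstract, so the sketch below uses a simple jitter-based stand-in to show the general idea: perturb each (score, label) calibration pair to produce synthetic neighbors before fitting the calibration map.

```python
import random

# Jitter-based stand-in for the paper's (unspecified) generation algorithm:
# each (score, label) pair spawns `copies` noisy clones with the same label,
# enlarging the calibration set before fitting isotonic regression.

def augment(scores, labels, copies=5, sigma=0.02, seed=0):
    rng = random.Random(seed)
    new_scores, new_labels = list(scores), list(labels)
    for score, label in zip(scores, labels):
        for _ in range(copies):
            noisy = score + rng.gauss(0.0, sigma)
            new_scores.append(min(1.0, max(0.0, noisy)))  # clamp to [0, 1]
            new_labels.append(label)  # synthetic point keeps original label
    return new_scores, new_labels

scores, labels = augment([0.2, 0.8], [0, 1])
print(len(scores))  # 12: 2 originals + 2 * 5 synthetic copies
```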

    Application of Iron Aluminides in the Combustion Chamber of Large Bore 2-Stroke Marine Engines

    Iron aluminides possess a unique combination of properties, such as attractive corrosion resistance in hot gas and wet chemical environments, a favorable strength-to-weight ratio, and low alloying-element costs, and they can be processed by conventional methods. For the current study, a promising iron aluminide (Fe-Al-Mo-Ti-B) was employed, which shows the potential to replace costly heat-resistant steels or expensive Ni-based alloys for components in large bore two-stroke marine engines. The prechamber, an integral part of the combustion system of dual-fuel two-stroke marine engines that must withstand harsh conditions, was selected as the target component. Prototypes made of the novel iron aluminide were manufactured via investment casting and via hot isostatic pressing of powder of the intermetallic alloy. The high-temperature oxidation behavior, the wet corrosion resistance in acid media, and the mechanical properties up to 700 °C were evaluated. A prototype of the prechamber was tested on a large bore two-stroke dual-fuel test engine, and post-test analysis of the component was performed. The results show that the employed iron aluminide alloy could be an economic alternative to the currently used Ni-based alloy.

    Better classifier calibration for small datasets

    Abstract Classifier calibration does not always go hand in hand with the classifier’s ability to separate the classes. There are applications where good classifier calibration, i.e., the ability to produce accurate probability estimates, is more important than class separation. When the amount of training data is limited, the traditional approach to improving calibration starts to crumble. In this article, we show how generating more data for calibration improves calibration algorithm performance in many cases where a classifier does not naturally produce well-calibrated outputs and the traditional approach fails. The proposed approach adds computational cost, but considering that the main use case is with small datasets, this extra cost remains insignificant, and prediction time is comparable to that of other methods. Of the tested classifiers, the largest improvement was detected with the random forest and naive Bayes classifiers. The proposed approach can therefore be recommended at least for those classifiers when the amount of data available for training is limited and good calibration is essential.
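    A common way to quantify the calibration quality discussed above is the expected calibration error (ECE); the sketch below computes it on made-up toy predictions. ECE is a standard metric, not this article's specific evaluation protocol.

```python
# Expected calibration error: bin predictions by confidence, then take the
# weighted average of |mean confidence - observed accuracy| per bin.

def ece(probs, labels, n_bins=10):
    total = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, p in enumerate(probs)
               if lo <= p < hi or (b == n_bins - 1 and p == 1.0)]
        if not idx:
            continue  # empty bin contributes nothing
        conf = sum(probs[i] for i in idx) / len(idx)   # mean confidence
        acc = sum(labels[i] for i in idx) / len(idx)   # observed accuracy
        total += len(idx) / len(probs) * abs(conf - acc)
    return total

probs  = [0.9, 0.8, 0.7, 0.3, 0.2]  # toy predicted probabilities
labels = [1, 1, 0, 0, 0]            # toy true outcomes
print(round(ece(probs, labels, n_bins=2), 3))  # 0.18 on this toy data
```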